Results 1 - 20 of 29
1.
IEEE J Biomed Health Inform ; 28(3): 1185-1194, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38446658

ABSTRACT

Cancer begins when healthy cells change and grow out of control, forming a mass called a tumor. Head and neck (H&N) cancers usually develop in or around the head and neck, including the mouth (oral cavity), nose and sinuses, throat (pharynx), and voice box (larynx). H&N cancers account for about 4% of all cancers and have a comparatively low five-year survival rate of 64.7%. FDG-PET/CT imaging is often used for early diagnosis and staging of H&N tumors, thus improving these patients' survival rates. This work presents a novel 3D inception-residual architecture aided by 3D depth-wise convolution and a squeeze-and-excitation block. We introduce a 3D depth-wise convolution-inception encoder with an additional 3D squeeze-and-excitation block, paired with a 3D depth-wise convolution-based residual learning decoder (3D-IncNet), which not only recalibrates the channel-wise features adaptively through explicit inter-dependency modeling but also integrates coarse and fine features, resulting in accurate tumor segmentation. We further demonstrate the effectiveness of the inception-residual encoder-decoder architecture in achieving better Dice scores and the impact of depth-wise convolution in lowering the computational cost. We applied a random forest to deep, clinical, and radiomics features for survival prediction. Experiments conducted on the benchmark HECKTOR21 challenge showed significantly better performance than the state of the art, achieving a concordance index of 0.836 and a Dice score of 0.811. We have made the model and code publicly available.
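
The two building blocks named above can be sketched compactly in PyTorch: a 3D squeeze-and-excitation block for channel recalibration plus a 3D depth-wise convolution. This is a minimal illustration, not the published 3D-IncNet code; channel counts and the reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class SqueezeExcite3D(nn.Module):
    """Generic 3D squeeze-and-excitation sketch: recalibrates channels
    via global context (the paper's full encoder is more elaborate)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                   # recalibrate channel-wise features

# Depth-wise 3D convolution: one filter per input channel (groups=channels),
# the main source of the computational savings the abstract mentions.
depthwise = nn.Conv3d(32, 32, kernel_size=3, padding=1, groups=32)
x = torch.randn(1, 32, 16, 64, 64)                     # (batch, channels, D, H, W)
y = SqueezeExcite3D(32)(depthwise(x))
```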


Subject(s)
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Head and Neck Neoplasms/diagnostic imaging; Head; Neck; Face
2.
Med Image Anal ; 88: 102833, 2023 08.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both research and clinical contexts. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions; it used an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.


Subject(s)
Image Processing, Computer-Assisted; White Matter; Pregnancy; Female; Humans; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Head; Fetus/diagnostic imaging; Algorithms; Magnetic Resonance Imaging/methods
3.
IEEE J Biomed Health Inform ; 27(7): 3302-3313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37067963

ABSTRACT

In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools rely heavily on the accurate segmentation of cardiac structures in MRI images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, there is a need for new methods to handle this structure's geometrical and textural complexities, notably in the presence of pathologies such as dilated right ventricle, tricuspid regurgitation, arrhythmogenesis, tetralogy of Fallot, and inter-atrial communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote research interest in right ventricle segmentation in multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine different scanners from three different vendors, and included a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.


Subject(s)
Deep Learning; Heart Ventricles; Humans; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Heart Atria
4.
Sensors (Basel) ; 23(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991629

ABSTRACT

Recently, significant progress has been achieved in developing deep learning-based approaches for estimating depth maps from monocular images. However, many existing methods rely on content and structure information extracted from RGB photographs, which often results in inaccurate depth estimation, particularly for regions with low texture or occlusions. To overcome these limitations, we propose a novel method that exploits contextual semantic information to predict precise depth maps from monocular images. Our approach leverages a deep autoencoder network incorporating high-quality semantic features from the state-of-the-art HRNet-v2 semantic segmentation model. By feeding the autoencoder network with these features, our method can effectively preserve the discontinuities of the depth images and enhance monocular depth estimation. Specifically, we exploit the semantic features related to the localization and boundaries of the objects in the image to improve the accuracy and robustness of the depth estimation. To validate the effectiveness of our approach, we tested our model on two publicly available datasets, NYU Depth v2 and SUN RGB-D. Our method outperformed several state-of-the-art monocular depth estimation techniques, achieving an accuracy of 85%, with a relative error (Rel) of 0.12, an RMS error of 0.523, and a log10 error of 0.0527. Our approach also demonstrated exceptional performance in preserving object boundaries and faithfully detecting small object structures in the scene.
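
The fusion idea, feeding semantic features into a depth decoder, can be sketched in PyTorch as below. The channel sizes, the single fusion stage, and the module name are illustrative assumptions; the paper's actual autoencoder and its interface to HRNet-v2 are more elaborate.

```python
import torch
import torch.nn as nn

class SemanticGuidedDepthDecoder(nn.Module):
    """Minimal sketch: concatenate image-encoder features with semantic
    features (e.g. from a pretrained segmentation head) so that object
    boundaries guide the predicted depth map."""
    def __init__(self, rgb_ch: int = 256, sem_ch: int = 48):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(rgb_ch + sem_ch, 128, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_depth = nn.Conv2d(128, 1, 1)  # 1-channel depth map

    def forward(self, rgb_feat, sem_feat):
        x = self.fuse(torch.cat([rgb_feat, sem_feat], dim=1))
        return self.to_depth(x)

rgb_feat = torch.randn(1, 256, 60, 80)   # encoder features of an RGB image
sem_feat = torch.randn(1, 48, 60, 80)    # semantic features at the same resolution
depth = SemanticGuidedDepthDecoder()(rgb_feat, sem_feat)  # (1, 1, 60, 80)
```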

5.
Entropy (Basel) ; 24(12)2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36554113

ABSTRACT

To fully comprehend neurodevelopment in healthy and congenitally abnormal fetuses, quantitative analysis of the human fetal brain is essential. This analysis requires automatic multi-tissue fetal brain segmentation techniques. This paper proposes an end-to-end, automatic yet effective multi-tissue fetal brain segmentation model called IRMMNET. It includes an inception-residual encoder block (EB) and a dense spatial attention (DSAM) block, which facilitate the extraction of multi-scale, fetal-brain-tissue-relevant information from multi-view MRI images, enhance feature reuse, and substantially reduce the number of parameters of the segmentation model. Additionally, we propose three methods for predicting gestational age (GA): GA prediction using a 3D autoencoder, GA prediction using radiomics features, and GA prediction using the IRMMNET segmentation model's encoder. Our experiments were performed on a dataset of 80 pathological and non-pathological magnetic resonance fetal brain volume reconstructions across a range of gestational ages (20 to 33 weeks) that were manually segmented into seven different tissue categories. The results showed that the proposed fetal brain segmentation model achieved a Dice score of 0.791±0.18, outperforming the state-of-the-art methods. The radiomics-based GA prediction method achieved the best results (RMSE: 1.42). We also demonstrated the generalization capabilities of the proposed methods for tasks such as head and neck tumor segmentation and the prediction of patients' survival days.
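
As a rough illustration of the spatial-attention idea, here is a generic CBAM-style block in PyTorch. The paper's dense spatial attention (DSAM) block is more elaborate, so treat this only as a sketch of the core weighting mechanism; the kernel size and feature shapes are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Weight each spatial location by pooled channel statistics: a
    generic spatial-attention sketch, not the published DSAM block."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # per-pixel channel mean
        mx, _ = x.max(dim=1, keepdim=True)     # per-pixel channel max
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # emphasize informative locations

feat = torch.randn(1, 64, 128, 128)            # e.g. fetal-brain MRI features
out = SpatialAttention()(feat)
```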

6.
Entropy (Basel) ; 24(9)2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36141151

ABSTRACT

In image classification with Deep Convolutional Neural Networks (DCNNs), the number of parameters in pointwise convolutions grows rapidly due to the multiplication of the number of filters by the number of input channels coming from the previous layer. Existing studies have demonstrated that a subnetwork can replace pointwise convolutional layers with significantly fewer parameters and fewer floating-point computations while maintaining the learning capacity. In this paper, we propose an improved scheme for reducing the complexity of pointwise convolutions in DCNNs for image classification, based on interleaved grouped filters without divisibility constraints. The proposed scheme utilizes grouped pointwise convolutions, in which each group processes a fraction of the input channels. It requires the number of channels per group as a hyperparameter, Ch. The subnetwork of the proposed scheme contains two consecutive convolutional layers, K and L, connected by an interleaving layer in the middle and summed at the end. The numbers of filter groups and of filters per group for layers K and L are determined by exact divisions of the original numbers of input channels and filters by Ch. If the divisions were not exact, the original layer could not be substituted. In this paper, we refine the previous algorithm so that input channels are replicated and groups can have different numbers of filters, to cope with non-exact divisibility. Thus, the proposed scheme further reduces the number of floating-point computations (by 11%) and trainable parameters (by 10%) relative to the previous method. We tested our optimization on an EfficientNet-B0 baseline architecture and ran classification tests on the CIFAR-10, Colorectal Cancer Histology, and Malaria datasets. For these datasets, our optimization saves 76%, 89%, and 91% of the trainable parameters of EfficientNet-B0, respectively, while keeping its test classification accuracy.
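
The scheme is easiest to see in code. The PyTorch sketch below replaces one pointwise convolution with two grouped pointwise layers, K and L, joined by channel interleaving and summed at the end, assuming exact divisibility; the channel-replication trick for non-divisible counts described above is omitted for brevity.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels between groups (as in ShuffleNet) so the two
    grouped layers exchange information."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class GroupedPointwise(nn.Module):
    """Sketch of the K/L subnetwork: grouped 1x1 convolutions with an
    interleaving layer in between and a sum at the end."""
    def __init__(self, channels: int, ch_per_group: int):
        super().__init__()
        g = channels // ch_per_group          # assumes exact divisibility here
        self.g = g
        self.K = nn.Conv2d(channels, channels, 1, groups=g)
        self.L = nn.Conv2d(channels, channels, 1, groups=g)

    def forward(self, x):
        k = self.K(x)
        l = self.L(channel_shuffle(k, self.g))
        return k + l                          # sum the two paths

x = torch.randn(1, 64, 32, 32)
y = GroupedPointwise(64, ch_per_group=16)(x)  # far fewer parameters than Conv2d(64, 64, 1)
```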

7.
Sensors (Basel) ; 22(14)2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35891033

ABSTRACT

In recent decades, significant advancements in robotics engineering and autonomous vehicles have increased the demand for precise depth measurements. Depth estimation (DE) is a traditional task in computer vision that can be addressed through numerous procedures. This task is vital in disparate applications such as augmented reality and target tracking. Conventional monocular DE (MDE) procedures rely on depth cues for depth prediction. Various deep learning techniques have demonstrated their potential for managing this traditionally ill-posed problem. The principal purpose of this paper is to present a state-of-the-art review of current developments in MDE based on deep learning techniques. To this end, the paper highlights the critical points of state-of-the-art works on MDE from several aspects. These aspects include input data shapes and training manners, such as supervised, semi-supervised, and unsupervised learning approaches, in combination with the different datasets and evaluation indicators applied. Finally, limitations regarding the accuracy of DL-based MDE models, computational time requirements, real-time inference, transferability, input image shape and domain adaptation, and generalization are discussed to open new directions for future research.


Subject(s)
Augmented Reality; Deep Learning; Forecasting
8.
Diagnostics (Basel) ; 12(5)2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35626208

ABSTRACT

Breast cancer needs to be detected early to reduce the mortality rate. Ultrasound (US) imaging can significantly enhance diagnosis in cases with dense breasts. Most existing computer-aided diagnosis (CAD) systems employ a single ultrasound image of the breast tumor from which to extract features and classify it as benign or malignant. However, the accuracy of such CAD systems is limited by large variations in tumor size and shape, irregular and ambiguous tumor boundaries, and the low signal-to-noise ratio of inherently noisy ultrasound images, together with the significant similarity between normal and abnormal tissues. To handle these issues, in this paper we propose a deep-learning-based radiomics method built on breast US sequences. The proposed approach involves three main components: radiomic feature extraction based on a deep learning network called ConvNeXt, a malignancy score pooling mechanism, and visual interpretations. Specifically, we employ the ConvNeXt network, a deep convolutional neural network (CNN) trained in the vision transformer style. We also propose an efficient pooling mechanism that fuses the malignancy scores of each breast US sequence frame based on image-quality statistics. The ablation study and experimental results demonstrate that our method achieves competitive results compared to other CNN-based methods.
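
The pooling idea can be illustrated with a quality-weighted average over per-frame scores. The specific quality statistic and the weighting rule below are assumptions, since the abstract does not spell out the mechanism.

```python
import numpy as np

def pool_malignancy_scores(frame_scores, frame_quality):
    """Fuse per-frame malignancy scores from a breast-US sequence into a
    single score, weighting frames by an image-quality statistic (a
    sketch; the paper's exact pooling mechanism may differ)."""
    q = np.asarray(frame_quality, dtype=float)
    w = q / q.sum()                        # normalize quality weights
    return float(np.dot(w, frame_scores))  # quality-weighted mean score

scores = [0.62, 0.71, 0.35, 0.80]          # per-frame CNN malignancy scores
quality = [0.9, 1.0, 0.3, 0.8]             # e.g. a sharpness/contrast statistic per frame
print(pool_malignancy_scores(scores, quality))
```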

9.
Diagnostics (Basel) ; 11(8)2021 Jul 31.
Article in English | MEDLINE | ID: mdl-34441319

ABSTRACT

BACKGROUND: The aim of the present study was to test our deep learning algorithm (DLA) by having it read retinographies. METHODS: We tested our DLA, built on convolutional neural networks, on 14,186 retinographies from our population and 1,200 images extracted from MESSIDOR. The retinal images were graded both by the DLA and, independently, by four retina specialists. Results of the DLA were evaluated in terms of accuracy (ACC), sensitivity (S), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC), distinguishing between identification of any type of DR (any DR) and referable DR (RDR). RESULTS: When testing the DLA for identifying any DR in our population, the results were: ACC = 99.75%, S = 97.92%, SP = 99.91%, PPV = 98.92%, NPV = 99.82%, and AUC = 0.983. When detecting RDR, the results were: ACC = 99.66%, S = 96.7%, SP = 99.92%, PPV = 99.07%, NPV = 99.71%, and AUC = 0.988. When testing the DLA for identifying any DR on MESSIDOR, the results were: ACC = 94.79%, S = 97.32%, SP = 94.57%, PPV = 60.93%, NPV = 99.75%, and AUC = 0.959. When detecting RDR, the results were: ACC = 98.78%, S = 94.64%, SP = 99.14%, PPV = 90.54%, NPV = 99.53%, and AUC = 0.968. CONCLUSIONS: Our DLA performed well, both in detecting any DR and in classifying eyes with RDR, on a sample of retinographies of type 2 DM patients from our population and on the MESSIDOR database.
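
For reference, the screening metrics reported above follow directly from the confusion matrix. A plain NumPy sketch (not the authors' code; the toy labels are illustrative):

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Compute ACC, S, SP, PPV, and NPV from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "ACC": (tp + tn) / len(y_true),
        "S":   tp / (tp + fn),   # sensitivity (recall on positives)
        "SP":  tn / (tn + fp),   # specificity
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

print(screening_metrics([1, 0, 1, 0, 1], [1, 0, 1, 1, 1]))
```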

10.
J Pediatr Rehabil Med ; 14(2): 237-245, 2021.
Article in English | MEDLINE | ID: mdl-33720857

ABSTRACT

PURPOSE: To assess changes in balance function in children with cerebral palsy (CP) after two weeks of daily training with personalized balance games. METHODS: Twenty-five children with CP, aged 5 to 18 years, were randomly assigned to experimental or control groups. Over a period of two weeks, all participants received 8-9 game sessions of 15-20 minutes each, totaling 150-160 minutes. The experimental group used personalized balance games available from the GAmification for Better LifE (GABLE) online serious gaming platform. Children in the control group played Nintendo Wii games using a handheld Wii Remote. Both groups received the same background treatment. Recorded outcome measures were the Trunk Control Measurement Scale (TCMS), the Timed Up & Go test (TUG), the center-of-pressure path length (COP-PL), and the Dynamic Balance Test (DBT). RESULTS: After two weeks of training, TCMS scores in the experimental group increased by 4.5 points (SD = 3.5, p < 0.05) and DBT results by 0.88 points (IQR = 1.03, p < 0.05), while these scores did not change significantly in the control group. TUG and COP-PL scores were not affected in either group. CONCLUSION: This study demonstrates improvement in balance function in children with CP after a two-week course of training with personalized rehabilitation computer games.


Subject(s)
Cerebral Palsy; Video Games; Adolescent; Child; Child, Preschool; Exercise Therapy; Humans; Pilot Projects; Postural Balance
11.
Diagnostics (Basel) ; 11(2)2021 Jan 22.
Article in English | MEDLINE | ID: mdl-33498999

ABSTRACT

COVID-19 is a fast-spreading disease all over the world, while hospital facilities remain restricted. Due to the unavailability of an appropriate vaccine or medicine, early identification of patients suspected to have COVID-19 plays an important role in limiting the extent of the disease. Lung computed tomography (CT) imaging is an alternative to the RT-PCR test for diagnosing COVID-19. Manual segmentation of lung CT images is time-consuming and presents several challenges, such as the high disparities in the texture, size, and location of infections. Patchy ground-glass opacities and consolidations, along with pathological changes, limit the accuracy of existing deep learning-based CT slice segmentation methods. To cope with these issues, in this paper we propose a fully automated and efficient deep learning-based method, called LungINFseg, to segment COVID-19 infections in lung CT images. Specifically, we propose a receptive-field-aware (RFA) module that can enlarge the receptive field of the segmentation models and increase the learning ability of the model without information loss. RFA includes convolution layers to extract COVID-19 features, dilated convolution consolidated with learnable parallel-group convolution to enlarge the receptive field, frequency-domain features obtained by discrete wavelet transform, which also enlarge the receptive field, and an attention mechanism to promote COVID-19-related features. Large receptive fields help deep learning models learn contextual information and COVID-19 infection-related features that yield accurate segmentation results. In our experiments, we used a total of 1,800+ annotated CT slices to build and test LungINFseg. We also compared LungINFseg with 13 state-of-the-art deep learning-based segmentation methods to demonstrate its effectiveness. LungINFseg achieved a Dice score of 80.34% and an intersection-over-union (IoU) score of 68.77%, higher than those of the other 13 segmentation methods. In particular, the Dice and IoU scores of LungINFseg were 10% better than those of the popular biomedical segmentation method U-Net.
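
One ingredient of the RFA module, enlarging the receptive field with dilated convolutions, can be sketched in PyTorch as below. The DWT branch and the attention mechanism are omitted, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DilatedBranch(nn.Module):
    """Growing dilation rates enlarge the field of view without extra
    parameters or loss of resolution; a sketch of one RFA ingredient."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4),
        )

    def forward(self, x):
        return self.branch(x) + x      # residual connection keeps fine detail

ct_feat = torch.randn(1, 64, 128, 128) # features from a lung-CT slice
out = DilatedBranch()(ct_feat)
```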

12.
Diagnostics (Basel) ; 10(11)2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33238512

ABSTRACT

Breast density estimation by visual evaluation is still challenging due to the low contrast and significant fluctuations in the fatty-tissue background of mammograms. The key to breast density classification is to correctly detect the dense tissues in mammographic images. Many methods have been proposed for breast density estimation; nevertheless, most of them are not fully automated. Besides, they are adversely affected by a low signal-to-noise ratio and by the variability of density in appearance and texture. This study aims to develop a fully automated breast tissue segmentation and classification pipeline using advanced deep learning techniques. A conditional Generative Adversarial Network (cGAN) is applied to segment the dense tissues in mammograms. To obtain a complete system for breast density classification, we propose a Convolutional Neural Network (CNN) to classify mammograms according to the Breast Imaging-Reporting and Data System (BI-RADS) standard. The classification network is fed with the segmented masks of dense tissues generated by the cGAN network. For screening mammography, 410 images of 115 patients from the INbreast dataset were used. The proposed framework can segment the dense regions with an accuracy, Dice coefficient, and Jaccard index of 98%, 88%, and 78%, respectively. Furthermore, we obtained a precision, sensitivity, and specificity of 97.85%, 97.85%, and 99.28%, respectively, for breast density classification. This study's findings are promising and show that the proposed deep learning-based techniques can produce a clinically useful computer-aided tool for breast density analysis in digital mammography.

13.
Comput Biol Med ; 127: 104049, 2020 12.
Article in English | MEDLINE | ID: mdl-33099218

ABSTRACT

Diabetic retinopathy (DR) has become a major worldwide health problem due to the increase in blindness among diabetics at early ages. The detection of DR pathologies such as microaneurysms, hemorrhages, and exudates through advanced computational techniques is of utmost importance in patient health care. New computer vision techniques are needed to improve upon the traditional screening of color fundus images. The segmentation of the entire anatomical structure of the retina is a crucial phase in detecting these pathologies. This work proposes a novel framework for fast and fully automatic blood vessel segmentation and fovea detection. The preprocessing method involved both the contrast-limited adaptive histogram equalization and the brightness-preserving dynamic fuzzy histogram equalization algorithms to enhance image contrast and eliminate noise artifacts. Afterwards, the color spaces and their intrinsic components were examined to identify the most suitable color model to reveal the foreground pixels against the entire background. Several samples were then collected and used by the renowned convexity-shape-prior segmentation algorithm. The proposed methodology achieved an average vasculature segmentation accuracy exceeding 96%, 95%, 98%, and 94% on the DRIVE, STARE, HRF, and Messidor publicly available datasets, respectively. An additional validation step reached an average accuracy of 94.30% on an in-house dataset provided by the Hospital Sant Joan of Reus (Spain). Moreover, an outstanding detection accuracy of over 98% was achieved for the foveal avascular zone. An extensive state-of-the-art comparison was also conducted. The proposed approach can thus be integrated into daily clinical practice to assist medical experts in the diagnosis of DR.
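
The first preprocessing step, CLAHE, is available in OpenCV. A minimal sketch follows; the file name and parameter values are chosen for illustration, and the green channel is used because retinal vessels typically show the strongest contrast there.

```python
import cv2

# Contrast-limited adaptive histogram equalization (CLAHE) on the green
# channel of a fundus image; "fundus.png" and the parameters are illustrative.
img = cv2.imread("fundus.png")
green = img[:, :, 1]                              # OpenCV loads BGR; index 1 is green
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)
cv2.imwrite("fundus_clahe.png", enhanced)
```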


Subject(s)
Diabetic Retinopathy; Retinal Vessels; Algorithms; Diabetic Retinopathy/diagnostic imaging; Fundus Oculi; Humans; Retinal Vessels/diagnostic imaging; Spain
14.
IEEE J Biomed Health Inform ; 24(3): 866-877, 2020 03.
Article in English | MEDLINE | ID: mdl-31199277

ABSTRACT

Recent studies have shown that the environment where people eat can affect their nutritional behavior [1]. In this paper, we provide automatic tools for the personalized analysis of a person's health habits through the examination of daily recorded egocentric photo-streams. Specifically, we propose a new automatic approach for the classification of food-related environments that is able to classify up to 15 such scenes. In this way, people can monitor the context around their food intake in order to get an objective insight into their daily eating routine. We propose a model that classifies food-related scenes organized in a semantic hierarchy. Additionally, we present and make available a new egocentric dataset composed of more than 33,000 images recorded by a wearable camera, on which our proposed model has been tested. Our approach obtains an accuracy and F-score of 56% and 65%, respectively, clearly outperforming the baseline methods.


Subject(s)
Food/classification; Image Processing, Computer-Assisted/methods; Photography/classification; Algorithms; Humans; Life Style; Machine Learning
15.
Telemed J E Health ; 26(8): 1001-1009, 2020 08.
Article in English | MEDLINE | ID: mdl-31682189

ABSTRACT

BACKGROUND: To validate our deep learning algorithm (DLA) for reading diabetic retinopathy (DR) retinographies. INTRODUCTION: Currently, DR detection is made by retinography; given the increasing incidence of diabetes mellitus, we need systems that help us screen for DR. MATERIALS AND METHODS: The DLA was built and trained using 88,702 images from EyePACS, 1,748 from Messidor-2, and 19,230 from our own population. For validation, a total of 38,339 retinographies from 17,669 patients (obtained from our DR screening databases) were read by the DLA and compared against four senior retina ophthalmologists for detecting any-DR and referable-DR. We determined Cohen's weighted kappa (CWK) index, sensitivity (S), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and type I and type II errors. RESULTS: The results of the DLA for detecting any-DR were: CWK = 0.886 ± 0.004 (95% confidence interval [CI] 0.879-0.894), S = 0.967, SP = 0.976, PPV = 0.836, and NPV = 0.996, with a type I error of 0.024 and a type II error of 0.004. Likewise, the referable-DR results were: CWK = 0.809 (95% CI 0.798-0.819), S = 0.998, SP = 0.968, PPV = 0.701, NPV = 0.928, type I error = 0.032, and type II error = 0.001. DISCUSSION: Our DLA can be used as a high-confidence diagnostic tool to help in DR screening, especially in cases that might be difficult for ophthalmologists or other professionals to identify. It can identify patients with any-DR and those that should be referred. CONCLUSIONS: The DLA can be valid as an aid in DR screening.
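
Cohen's weighted kappa, the agreement index reported above, can be computed with scikit-learn. The toy grades and the quadratic weighting below are assumptions, as the abstract does not state which weighting scheme was used.

```python
from sklearn.metrics import cohen_kappa_score

dla_grades   = [0, 1, 1, 0, 2, 1, 0, 0]   # DLA reading per retinography
human_grades = [0, 1, 2, 0, 2, 1, 0, 1]   # reference reading by ophthalmologists
kappa = cohen_kappa_score(dla_grades, human_grades, weights="quadratic")
print(f"CWK = {kappa:.3f}")
```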


Subject(s)
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Ophthalmologists; Algorithms; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/epidemiology; Diagnostic Techniques, Ophthalmological; Humans; Mass Screening; Sensitivity and Specificity
16.
Telemed J E Health ; 25(1): 31-40, 2019 01.
Article in English | MEDLINE | ID: mdl-29466097

ABSTRACT

BACKGROUND: The aim of this study was to build a clinical decision support system (CDSS) for diabetic retinopathy (DR), based on type 2 diabetes mellitus (DM) patients. METHOD: We built a CDSS from a sample of 2,323 patients, divided into a training set of 1,212 patients and a testing set of 1,111 patients. The CDSS is based on a fuzzy random forest, which is a set of fuzzy decision trees. A fuzzy decision tree is a hierarchical data structure that classifies a patient into several classes to some degree, depending on the values that the patient presents for the attributes related to the DR risk factors. Each node of the tree is an attribute, and each branch of the node corresponds to a possible value of the attribute. The leaves of the tree link the patient to a particular class (DR, no DR). RESULTS: A CDSS was built with 200 trees in the forest and three variables at each node. The accuracy of the CDSS was 80.76%, sensitivity was 80.67%, and specificity was 85.96%. The variables used were current age, gender, DM duration and treatment, arterial hypertension, body mass index, HbA1c, estimated glomerular filtration rate, and microalbuminuria. DISCUSSION: Some studies concluded that screening every 3 years was cost-effective, but they did not personalize risk factors. In this study, the fuzzy-rule-based random forest allowed us to build a personalized CDSS. CONCLUSIONS: We have developed a CDSS that can help in diabetic retinopathy screening programs, although further testing is essential despite our promising results.
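
A plain (non-fuzzy) random forest with the stated shape, 200 trees and three candidate variables per split, can be sketched with scikit-learn. Fuzzy decision trees are not part of scikit-learn, and the data below are synthetic, so this only mirrors the ensemble's structure, not the paper's CDSS.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# columns stand in for: age, DM duration, BMI, HbA1c, eGFR,
# microalbuminuria, arterial hypertension (synthetic values)
X = rng.normal(size=(1212, 7))
y = rng.integers(0, 2, size=1212)          # 1 = DR, 0 = no DR
clf = RandomForestClassifier(n_estimators=200, max_features=3).fit(X, y)
print(clf.predict_proba(X[:2]))            # per-patient DR risk estimates
```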


Subject(s)
Decision Support Systems, Clinical/organization & administration; Decision Trees; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/epidemiology; Mass Screening/organization & administration; Age Factors; Age of Onset; Aged; Aged, 80 and over; Blood Pressure; Body Mass Index; Glycated Hemoglobin; Humans; Kidney Function Tests; Middle Aged; Prospective Studies; Risk Factors; Sensitivity and Specificity; Sex Factors
17.
Biomed Mater Eng ; 29(5): 551-566, 2018.
Article in English | MEDLINE | ID: mdl-30400071

ABSTRACT

Alzheimer's is a degenerative disorder that attacks neurons, resulting in loss of memory, thinking and language skills, and behavioral changes. Computer-aided detection methods can uncover crucial information recorded by electroencephalograms. A systematic literature search presents the wavelet transform as a frequently used technique in Alzheimer's detection; however, it requires a predefined basis function, which is considered a significant problem. In this work, the concept of empirical mode decomposition is introduced as an alternative for processing Alzheimer signals. The performance of empirical mode decomposition relies heavily on a threshold parameter. In our previous works, we found that the existing thresholding techniques were not able to highlight relevant information. Here, the use of Tsallis entropy as a thresholder is evaluated through the combination of empirical mode decomposition and neural networks. Thanks to the extraction of better features that boost classification accuracy, the proposed approach outperforms the state of the art in terms of peak signal-to-noise ratio and root mean square error. Hence, our methodology is more likely to succeed than methods based on other thresholding rules such as Bayes, Normal, and Visu shrink. We report a final accuracy rate of 80%, while the aforementioned techniques only yield performances of 65%, 60%, and 40%, respectively.
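
Tsallis entropy has a closed form, S_q = (1 - sum_i p_i^q) / (q - 1). A minimal NumPy sketch of using it to score a component from empirical mode decomposition follows; the value of q and the histogram binning are chosen for illustration, not taken from the paper.

```python
import numpy as np

def tsallis_entropy(signal, q: float = 2.0, bins: int = 64) -> float:
    """Tsallis entropy of a signal's amplitude histogram:
    S_q = (1 - sum(p_i ** q)) / (q - 1)."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# e.g. score an intrinsic mode function (IMF) of an EEG channel before
# deciding whether to keep or threshold it
imf = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
print(tsallis_entropy(imf))
```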


Subject(s)
Alzheimer Disease/diagnosis; Electroencephalography/methods; Entropy; Signal Processing, Computer-Assisted; Algorithms; Bayes Theorem; Humans; Neural Networks, Computer; Pattern Recognition, Automated/methods; Wavelet Analysis
20.
Sci Rep ; 7(1): 14984, 2017 11 03.
Article in English | MEDLINE | ID: mdl-29101392

ABSTRACT

In this paper, a novel edge-based active contour method is proposed based on the difference of Gaussians (DoG) to segment intensity-inhomogeneous images. DoG is known as a feature enhancement tool that can enhance the edges of an image. However, in the proposed energy functional it is used as an edge-indicator parameter, which acts like a balloon force during the level-set curve evolution process. In the proposed formulation, the internal energy term penalizes the deviation of the level-set function from a signed distance function, and the external energy term evolves the contour towards the boundaries of the objects. The proposed method has three main advantages. First, the image difference computed using the DoG function provides the global structure of an image, which helps to segment the image globally, something that traditional edge-based methods are unable to do. Second, it has a low time complexity compared to the state-of-the-art active contours developed in the context of intensity inhomogeneity. Third, it is not sensitive to the initial position of the contour. Experimental results on both synthetic and real brain magnetic resonance (MR) images show that the proposed method yields better segmentation results than the state of the art.
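
The DoG edge indicator itself is a two-line computation. A minimal SciPy sketch follows; the sigma values are illustrative, and the level-set energy functional that embeds this quantity is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_indicator(image, sigma1: float = 1.0, sigma2: float = 2.0):
    """Difference of Gaussians: near zero in flat regions, large near
    boundaries, so it can act as an edge indicator (and balloon-like
    force) in a level-set evolution."""
    img = image.astype(float)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

# synthetic example: a bright disk on a dark background
yy, xx = np.mgrid[:128, :128]
disk = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
edges = dog_edge_indicator(disk)
print(edges.min(), edges.max())   # strong response near the disk boundary
```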
